Training Discriminative HMM by Optimal Allocation of Gaussian Kernels

Authors

  • Zhijie Yan
  • Peng Liu
  • Jun Du
  • Frank Soong
  • Renhua Wang
Abstract

We propose to train Hidden Markov Models (HMMs) by allocating Gaussian kernels non-uniformly across states so as to optimize a selected discriminative training criterion. The optimal kernel allocation problem is first formulated under a non-discriminative, Maximum Likelihood (ML) criterion and then generalized to incorporate discriminative ones. An effective kernel exchange algorithm is derived and tested on TIDIGITS, a speaker-independent (man, woman, boy, and girl), connected-digit recognition database. Relative word error rate reductions of 46–51% are obtained compared to the conventional, uniformly allocated ML baseline. The recognition performance of discriminative kernel allocation is also consistently better than that of the non-discriminative, ML-based, non-uniform kernel allocation.
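The abstract gives no implementation details; as a rough, assumption-laden sketch of the kernel-exchange idea, the following Python snippet greedily moves one kernel at a time from the state that loses least to the state that gains most under a chosen training criterion. The function criterion_score is a hypothetical placeholder (per-state log-likelihood for ML, or a discriminative objective in its place); nothing here is the authors' implementation.

```python
# Hypothetical sketch of kernel exchange across HMM states (not the authors' code).
# Each state s has n[s] Gaussian kernels; the total kernel budget stays fixed while
# kernels are moved, one at a time, from the state that loses least to the state
# that gains most under the selected training criterion.

def criterion_score(state_data, num_kernels):
    """Placeholder: score of modeling `state_data` with `num_kernels` Gaussians
    under the selected criterion (e.g. per-state log-likelihood after EM, or a
    discriminative objective). Assumed non-decreasing in `num_kernels`."""
    raise NotImplementedError

def exchange_kernels(state_data, n, min_kernels=1, max_iters=100):
    """Greedily reallocate kernels between states, keeping the total constant."""
    n = dict(n)  # state -> current kernel count
    if len(n) < 2:
        return n
    for _ in range(max_iters):
        # Gain from adding one kernel to each state, loss from removing one.
        gains = {s: criterion_score(state_data[s], n[s] + 1)
                    - criterion_score(state_data[s], n[s]) for s in n}
        losses = {s: criterion_score(state_data[s], n[s])
                     - criterion_score(state_data[s], n[s] - 1)
                  for s in n if n[s] > min_kernels}
        if not losses:
            break
        donor = min(losses, key=losses.get)              # cheapest kernel to give up
        candidates = {s: g for s, g in gains.items() if s != donor}
        receiver = max(candidates, key=candidates.get)   # most valuable place to add
        if candidates[receiver] <= losses[donor]:
            break                                        # no exchange helps any more
        n[donor] -= 1
        n[receiver] += 1
    return n
```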

Related articles

Non-uniform Kernel Allocation Based Parsimonious HMM

In a conventional Gaussian mixture based Hidden Markov Model (HMM), all states are usually modeled with a uniform, fixed number of Gaussian kernels. In this paper, we propose to allocate kernels non-uniformly to construct a more parsimonious HMM. Different numbers of Gaussian kernels are allocated to states in a non-uniform and parsimonious way so as to optimize the Minimum Description Length (MDL)...
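As a hedged illustration of what an MDL-driven allocation score could look like (the exact formulation is not reproduced in this snippet), the sketch below assumes the common two-part MDL form with diagonal-covariance Gaussians; the function name and parameter counting are assumptions, not the paper's.

```python
import math

# Hypothetical per-state MDL score in the common two-part form:
#   MDL = -log-likelihood + (number of free parameters / 2) * log(number of frames)
# assuming diagonal-covariance Gaussians of dimension `dim`.
def mdl_score(log_likelihood, num_kernels, dim, num_frames):
    num_params = (num_kernels - 1) + 2 * num_kernels * dim  # weights + means + variances
    return -log_likelihood + 0.5 * num_params * math.log(num_frames)
```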

Optimal Clustering and Non-uniform Allocation of Gaussian Kernels in Scalar Dimension for HMM Compression

We propose an algorithm for optimal clustering and non-uniform allocation of Gaussian kernels in the scalar (feature) dimension to compress complex, Gaussian mixture-based, continuous density HMMs into computationally efficient, small-footprint models. The symmetric Kullback-Leibler divergence (KLD) is used as the universal distortion measure and it is minimized in both kernel clustering and allocat...
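A minimal sketch of the symmetric Kullback-Leibler divergence between two univariate Gaussians, i.e. the kind of scalar-dimension distortion measure this abstract mentions; the closed form is standard, but the interface and names below are illustrative rather than taken from the paper.

```python
import math

def kl_gauss_1d(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for one-dimensional Gaussians.
    return 0.5 * (math.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def symmetric_kld(mu_p, var_p, mu_q, var_q):
    # Symmetric KLD used as a clustering distortion between two scalar kernels.
    return (kl_gauss_1d(mu_p, var_p, mu_q, var_q)
            + kl_gauss_1d(mu_q, var_q, mu_p, var_p))
```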

A comparison of hybrid HMM architectures using global discriminative training

This paper presents a comparison of different model architectures for TIMIT phoneme recognition. The baseline is a conventional diagonal-covariance Gaussian mixture HMM. This system is compared to two different hybrid MLP/HMMs, both adhering to the same restrictions regarding input context and output states as the Gaussian mixtures. All free parameters in the three systems are jointly optimised u...

Speech enhancement based on hidden Markov model using sparse code shrinkage

This paper presents a new hidden Markov model-based (HMM-based) speech enhancement framework based on independent component analysis (ICA). We propose analytical procedures for training clean speech and noise models with the Baum re-estimation algorithm and present a maximum a posteriori (MAP) estimator based on a Laplace-Gaussian combination (for clean speech and noise, respectively) in the HMM ...
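As a hedged aside on the Laplace-Gaussian combination mentioned above: for a single coefficient with a Laplacian clean-speech prior and additive Gaussian noise, the MAP estimate reduces to soft thresholding. The parameterization below is assumed for illustration and is not the paper's derivation.

```python
# Illustrative only: MAP estimate of a Laplacian-distributed clean-speech
# coefficient observed in additive Gaussian noise.  With a Laplace prior
# p(s) = exp(-|s| / b) / (2 b) and noise variance sigma2, the MAP solution
# is soft thresholding of the noisy observation y with threshold sigma2 / b.
def map_laplace_gaussian(y, sigma2, b):
    threshold = sigma2 / b
    if y > threshold:
        return y - threshold
    if y < -threshold:
        return y + threshold
    return 0.0
```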

Structured Support Vector Machines for Speech Recognition

Discriminative training criteria and discriminative models are two effective improvements for HMM-based speech recognition. This thesis proposed a structured support vector machine (SSVM) framework suitable for medium to large vocabulary continuous speech recognition. An important aspect of structured SVMs is the form of the features. Several previously proposed features in the field are summarized in ...


Journal title:

Volume:   Issue:

Pages:  -

Publication date: 2006